Video representation learning has been successful in video-text pre-training for zero-shot transfer, where each sentence is trained to be close to its paired video clips in a common feature space. For long videos, given a paragraph of description whose sentences describe different segments of the video, matching all sentence-clip pairs implicitly aligns the paragraph with the full video. However, such a unit-level similarity measure may ignore the global temporal context over a long time span, which inevitably limits the generalization ability. In this paper, we propose a contrastive learning framework, TempCLR, to compare the full video and the paragraph explicitly. As the video/paragraph is formulated as a sequence of clips/sentences, under the constraint of their temporal order, we use dynamic time warping to compute the minimum cumulative cost over sentence-clip pairs as the sequence-level distance. To explore the temporal dynamics, we break the consistency of temporal order by shuffling the video clips or sentences according to the temporal granularity. In this way, the resulting clip/sentence representations perceive temporal information and thus facilitate sequence alignment. Beyond pre-training on videos and paragraphs, our approach also generalizes to matching between different video instances. We evaluate our approach on video retrieval, action step localization, and few-shot action recognition, and achieve consistent performance gains over all three tasks. Detailed ablation studies are provided to justify the design of the approach.
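As a rough illustration of the sequence-level distance described above, the sketch below computes the minimum cumulative cost of aligning sentence and clip embeddings under their temporal order with dynamic time warping. It is a minimal NumPy example under an assumed cosine-distance cost, not TempCLR's implementation.

```python
import numpy as np

def dtw_distance(cost):
    """Minimum cumulative alignment cost over a (num_sentences x num_clips)
    pairwise cost matrix, with moves restricted to preserve temporal order."""
    n, m = cost.shape
    acc = np.full((n + 1, m + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            acc[i, j] = cost[i - 1, j - 1] + min(
                acc[i - 1, j],      # this clip also matched the previous sentence
                acc[i, j - 1],      # this sentence also matched the previous clip
                acc[i - 1, j - 1],  # advance to a new sentence-clip pair
            )
    return acc[n, m]

# Illustrative usage: cosine distances between L2-normalized embeddings.
sent = np.random.randn(4, 256)
sent /= np.linalg.norm(sent, axis=1, keepdims=True)
clip = np.random.randn(6, 256)
clip /= np.linalg.norm(clip, axis=1, keepdims=True)
print(dtw_distance(1.0 - sent @ clip.T))  # sequence-level distance
```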
Video event extraction aims to detect salient events from a video and identify the arguments for each event as well as their semantic roles. Existing methods focus on capturing the overall visual scene of each frame, ignoring fine-grained argument-level information. Inspired by the definition of events as changes of states, we propose a novel framework to detect video events by tracking the changes in the visual states of all involved arguments, which are expected to provide the most informative evidence for the extraction of video events. In order to capture the visual state changes of arguments, we decompose them into changes in pixels within objects, displacements of objects, and interactions among multiple arguments. We further propose Object State Embedding, Object Motion-aware Embedding and Argument Interaction Embedding to encode and track these changes respectively. Experiments on various video event extraction tasks demonstrate significant improvements compared to state-of-the-art models. In particular, on verb classification, we achieve 3.49% absolute gains (19.53% relative gains) in F1@5 on Video Situation Recognition.
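To make the argument-level decomposition more concrete, here is a toy sketch of how per-argument appearance change, displacement, and pairwise interaction cues might be encoded and fused. The module names, dimensions, and fusion scheme are illustrative assumptions, not the paper's Object State, Object Motion-aware, and Argument Interaction Embeddings.

```python
import torch
import torch.nn as nn

class ArgumentStateTracker(nn.Module):
    """Toy sketch: fuse per-argument appearance change, box displacement, and
    pairwise interaction cues into one embedding per argument."""
    def __init__(self, feat_dim=256, hidden=256):
        super().__init__()
        self.state_enc = nn.Linear(2 * feat_dim, hidden)   # object features at t and t+1
        self.motion_enc = nn.Linear(4, hidden)             # box displacement (dx, dy, dw, dh)
        self.interact_enc = nn.MultiheadAttention(hidden, num_heads=4, batch_first=True)
        self.fuse = nn.Linear(2 * hidden, hidden)

    def forward(self, obj_feats_t, obj_feats_t1, box_delta):
        # obj_feats_*: (num_args, feat_dim); box_delta: (num_args, 4)
        state = self.state_enc(torch.cat([obj_feats_t, obj_feats_t1], dim=-1))
        motion = self.motion_enc(box_delta)
        args = (state + motion).unsqueeze(0)               # (1, num_args, hidden)
        interact, _ = self.interact_enc(args, args, args)  # interactions among arguments
        return self.fuse(torch.cat([args, interact], dim=-1)).squeeze(0)

tracker = ArgumentStateTracker()
out = tracker(torch.randn(5, 256), torch.randn(5, 256), torch.randn(5, 4))
print(out.shape)  # torch.Size([5, 256])
```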
GAN inversion aims to invert an input image into the latent space of a pre-trained GAN. Despite recent advances in GAN inversion, mitigating the trade-off between distortion and editability remains challenging, i.e., reconstructing the input image accurately while editing the inverted image with little degradation in visual quality. The recently proposed pivotal tuning model makes significant progress with a two-step approach that first inverts the input image into a latent code, called the pivot code, and then alters the generator so that the input image can be accurately mapped into the pivot code. Here, we show that both reconstruction and editability can be improved by a proper design of the pivot code. We propose a simple yet effective method, called cycle encoding, to provide a high-quality pivot code. The key idea of our method is to progressively train the encoder in different spaces according to a cycle scheme: W -> W+ -> W. This training method preserves the properties of both the W and W+ spaces, i.e., the high editability of W and the low distortion of W+. To further reduce distortion, we also propose to refine the pivot code with an optimization-based method, in which a regularization term is introduced to reduce the degradation in editability. Qualitative and quantitative comparisons against several state-of-the-art methods demonstrate the superiority of our approach.
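The cycle scheme can be pictured as staged training of the same encoder under alternating latent-space constraints. The sketch below is a heavily simplified, self-contained stand-in: the "generator" and "encoder" are toy linear layers, collapsing W+ to W by averaging the per-layer codes is an assumption, and only a reconstruction loss is used, so it illustrates the W -> W+ -> W schedule rather than the actual cycle encoding method.

```python
import torch
import torch.nn as nn

# Toy stand-ins (assumptions): a real setup would use a frozen StyleGAN generator G
# and an image encoder E that predicts one 512-d code per generator layer.
NUM_LAYERS, DIM = 14, 512
G = nn.Linear(NUM_LAYERS * DIM, 3 * 32 * 32).requires_grad_(False)  # frozen "generator"
E = nn.Linear(3 * 32 * 32, NUM_LAYERS * DIM)                        # trainable encoder

def encode(img, space):
    """Map images to a pivot code in W (one code replicated across layers,
    here obtained by averaging, which is an assumption) or W+ (per-layer codes)."""
    wplus = E(img.flatten(1)).view(-1, NUM_LAYERS, DIM)
    if space == "w":
        wplus = wplus.mean(dim=1, keepdim=True).expand(-1, NUM_LAYERS, DIM)
    return wplus

opt = torch.optim.Adam(E.parameters(), lr=1e-4)
imgs = torch.rand(4, 3, 32, 32)
for space in ["w", "w+", "w"]:           # the cycle schedule W -> W+ -> W
    for _ in range(10):                  # a few steps per stage, for illustration only
        recon = G(encode(imgs, space).flatten(1)).view(4, 3, 32, 32)
        loss = nn.functional.mse_loss(recon, imgs)  # reconstruction objective only
        opt.zero_grad(); loss.backward(); opt.step()
```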
Self-driving cars (SDCs) commonly implement a perception pipeline that detects surrounding obstacles and tracks their moving trajectories, which lays the foundation for the subsequent driving decision process. Although the security of obstacle detection in SDCs has been studied in depth, attackers have only recently started to exploit vulnerabilities in the tracking module. Compared with attacking the object detector alone, this new attack strategy influences driving decisions more effectively with a smaller attack budget. However, it remains open whether the revealed vulnerability is still effective in end-to-end self-driving systems and, if so, how the threat can be mitigated. In this paper, we present the first systematic study of the security of object tracking in SDCs. Through a comprehensive case study on the full perception pipeline of Baidu's Apollo, we demonstrate that the mainstream Kalman Filter (KF)-based multi-object tracker (MOT) is unsafe even with a multi-sensor fusion mechanism enabled. Our root-cause analysis reveals that the vulnerability is innate to the design of KF-based MOT: it mishandles the prediction results of the object detector, as the adopted KF algorithm is prone to trusting an observation more when its deviation from the prediction is larger. To address this design flaw, we propose a simple yet effective security patch for KF-based MOT, whose core is an adaptive strategy that balances the KF's focus between observations and predictions according to the anomaly index of the observation-prediction deviation, and which comes with certified effectiveness against a generalized hijacking attack model. Extensive evaluation on 4 existing KF-based MOT implementations (including 2D and 3D, academic and Apollo's) validates the defense effectiveness and the trivial performance overhead of our approach.
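A minimal way to picture the observation/prediction balancing idea is a scalar Kalman update whose gain is attenuated when the normalized innovation looks anomalous. The threshold and scaling rule below are illustrative assumptions, not the certified patch evaluated in the paper.

```python
import numpy as np

def adaptive_kf_update(x_pred, P_pred, z, R, threshold=3.0):
    """Toy 1D Kalman update that down-weights anomalous observations:
    when the normalized innovation exceeds a threshold, the gain is scaled
    down so the tracker leans on its own prediction instead."""
    innovation = z - x_pred
    S = P_pred + R                            # innovation covariance
    anomaly = abs(innovation) / np.sqrt(S)    # normalized observation-prediction deviation
    K = P_pred / S                            # standard Kalman gain
    if anomaly > threshold:
        K *= threshold / anomaly              # shrink trust in the observation
    x_new = x_pred + K * innovation
    P_new = (1.0 - K) * P_pred
    return x_new, P_new

print(adaptive_kf_update(x_pred=10.0, P_pred=1.0, z=25.0, R=1.0))
```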
In this paper, we present a novel insider attack named Matryoshka, which employs an irrelevant scheduled-to-publish DNN model as a carrier model that covertly carries multiple secret models memorizing the functionality of private ML data stored in local data centers. Instead of treating the parameters of the carrier model as bit strings and applying conventional steganography, we devise a novel parameter-sharing approach that exploits the learning capacity of the carrier model for information hiding. Matryoshka simultaneously achieves: (i) high capacity: with almost no utility loss of the carrier model, Matryoshka can hide a 26x larger secret model or 8 secret models of different architectures spanning different application domains in the carrier model, neither of which can be done with existing steganography techniques; (ii) decoding efficiency: once the published carrier model is downloaded, an outside colluder can exclusively decode the hidden models with only several integer secrets and the knowledge of the hidden model architectures; (iii) effectiveness: moreover, almost all of the recovered models perform similarly to models trained independently on the private data; (iv) robustness: information redundancy is naturally implemented to achieve resilience against common post-processing techniques applied to the carrier before publishing; (v) covertness: a model inspector with different levels of prior knowledge can hardly distinguish a carrier model from a normal model.
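For intuition about parameter sharing as a hiding channel (as opposed to bit-string steganography), the sketch below shows only the decoding side: a few integer secrets seed a permutation of the carrier's flattened parameters, and slices of that permutation are reshaped into the hidden model's weight tensors. In the actual attack the carrier would be trained so that these shared slices also perform the secret task; the function and variable names here are hypothetical.

```python
import numpy as np

def extract_hidden_weights(carrier_params, secret_seed, hidden_shapes):
    """Toy sketch of parameter sharing: a secret integer seeds a permutation
    of the flattened carrier parameters, and consecutive slices of that
    permutation are reshaped into the hidden model's weight tensors."""
    flat = np.concatenate([p.ravel() for p in carrier_params])
    order = np.random.default_rng(secret_seed).permutation(flat.size)
    hidden, offset = [], 0
    for shape in hidden_shapes:
        n = int(np.prod(shape))
        hidden.append(flat[order[offset:offset + n]].reshape(shape))
        offset += n
    return hidden

# The decoder only needs the secret seed and the hidden model architecture.
carrier = [np.random.randn(512, 512), np.random.randn(512, 10)]
hidden = extract_hidden_weights(carrier, secret_seed=1234, hidden_shapes=[(64, 64), (64, 2)])
print([w.shape for w in hidden])
```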
Language models demonstrate both quantitative improvements and new qualitative capabilities with increasing scale. Despite their potentially transformative impact, these new capabilities are as yet poorly characterized. In order to inform future research, prepare for disruptive new model capabilities, and ameliorate socially harmful effects, it is vital that we understand the present and near-future capabilities and limitations of language models. To address this challenge, we introduce the Beyond the Imitation Game benchmark (BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 442 authors across 132 institutions. Task topics are diverse, drawing problems from linguistics, childhood development, math, common-sense reasoning, biology, physics, social bias, software development, and beyond. BIG-bench focuses on tasks that are believed to be beyond the capabilities of current language models. We evaluate the behavior of OpenAI's GPT models, Google-internal dense transformer architectures, and Switch-style sparse transformers on BIG-bench, across model sizes spanning millions to hundreds of billions of parameters. In addition, a team of human expert raters performed all tasks in order to provide a strong baseline. Findings include: model performance and calibration both improve with scale, but are poor in absolute terms (and when compared with rater performance); performance is remarkably similar across model classes, though with benefits from sparsity; tasks that improve gradually and predictably commonly involve a large knowledge or memorization component, whereas tasks that exhibit "breakthrough" behavior at a critical scale often involve multiple steps or components, or brittle metrics; and social bias typically increases with scale in settings with ambiguous context, but this can be improved with prompting.
In this paper, we introduce VCSL (Video Copy Segment Localization), a new comprehensive segment-level annotated video copy dataset. Compared with existing copy detection datasets restricted by either video-level annotation or small scale, VCSL not only has two orders of magnitude more segment-level labeled data, with 160k realistic video copy pairs containing more than 280k localized copied segment pairs, but also covers a wide variety of video categories and durations. All the copied segments inside each collected video pair are manually extracted and accompanied by precisely annotated starting and ending timestamps. Alongside the dataset, we also propose a novel evaluation protocol that better measures the prediction accuracy of copy-overlapping segments between a pair of videos and shows improved adaptability in different scenarios. By benchmarking several baseline and state-of-the-art segment-level video copy detection methods with the proposed dataset and evaluation metric, we provide a comprehensive analysis that reveals the strengths and weaknesses of current approaches. The VCSL dataset, metric and benchmark codes are all publicly available at https://github.com/alipay/vcsl.
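To give a feel for segment-level evaluation of copied-segment predictions, here is a generic temporal-IoU-based F1 over (start, end) segments. This is only a simple baseline metric for intuition; it is not the VCSL evaluation protocol described above.

```python
def interval_iou(a, b):
    """Temporal IoU of two (start, end) segments in seconds."""
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = (a[1] - a[0]) + (b[1] - b[0]) - inter
    return inter / union if union > 0 else 0.0

def segment_f1(pred, gt, thresh=0.5):
    """Greedy segment-level F1: a predicted segment counts as a hit if it
    overlaps an unmatched ground-truth segment with IoU above a threshold."""
    matched, tp = set(), 0
    for p in pred:
        for i, g in enumerate(gt):
            if i not in matched and interval_iou(p, g) >= thresh:
                matched.add(i); tp += 1; break
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gt) if gt else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

print(segment_f1([(0, 10), (30, 40)], [(2, 12), (50, 60)]))  # 0.5
```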
Deep neural networks, such as Deep-FSMN, have been widely studied for keyword spotting (KWS) applications. However, computational resources for these networks are significantly constrained since they usually run on-call on edge devices. In this paper, we present BiFSMN, an accurate and extremely efficient binary neural network for KWS. We first construct a high-frequency enhancement distillation scheme for binarization-aware training, which emphasizes the high-frequency information of the full-precision network's representation that is more crucial for the optimization of the binarized network. Then, to allow instant and adaptive accuracy-efficiency trade-offs at runtime, we also propose a thinnable binarization architecture to further liberate the acceleration potential of the binarized network from the topology perspective. Moreover, we implement a fast bitwise computation kernel for BiFSMN on ARMv8 devices, which fully utilizes registers and increases instruction throughput to push the limit of deployment efficiency. Extensive experiments show that BiFSMN outperforms existing binarization methods by convincing margins on various datasets and is even comparable with the full-precision counterpart (e.g., less than a 3% drop on Speech Commands V1-12). We highlight that, benefiting from the thinnable architecture and the optimized 1-bit implementation, BiFSMN can achieve an impressive 22.3x speedup and 15.5x storage saving on real-world edge hardware.
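As background on what 1-bit execution means here, the sketch below binarizes both inputs and weights to {-alpha, +alpha} with a per-tensor scaling factor before the matrix product. This is the common sign-plus-scale binarization scheme used in 1-bit networks, shown in plain NumPy for clarity; it is not BiFSMN's distillation scheme or its ARMv8 kernel.

```python
import numpy as np

def binarize(x):
    """Sign binarization with a per-tensor scaling factor
    (alpha = mean absolute value), as commonly used in 1-bit networks."""
    alpha = np.abs(x).mean()
    return alpha * np.sign(np.where(x == 0, 1.0, x))

def binary_linear(x, w):
    """Forward pass of a binarized fully connected layer: both the input and
    the weights are reduced to {-alpha, +alpha} before the matrix product."""
    return binarize(x) @ binarize(w).T

x = np.random.randn(8, 128)       # a batch of acoustic features
w = np.random.randn(64, 128)      # full-precision weights kept during training
print(binary_linear(x, w).shape)  # (8, 64)
```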
High-precision camera re-localization in a pre-established 3D environment map is the basis for many tasks, such as augmented reality, robotics and autonomous driving. Point-based visual re-localization approaches have been well developed over recent decades, but remain insufficient in some challenging cases. In this paper, we design a complete pipeline for camera pose refinement with points and lines, which contains an innovatively designed line extraction CNN named VLSE, a line matching approach and a pose optimization approach. We adopt a novel line representation and customize a hybrid convolution block based on the Stacked Hourglass network to detect accurate and stable line features in images. We then adopt a geometry-based strategy to obtain precise 2D-3D line correspondences using the epipolar constraint and reprojection filtering. A point-line joint cost function is then constructed to optimize the camera pose starting from the initial coarse pose given by the pure point-based localization. Sufficient experiments are conducted on open datasets, i.e., the line extractor on the Wireframe dataset and the localization performance on InLoc DUC1 and DUC2, to confirm the effectiveness of our point-line joint pose optimization method.
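The point-line joint cost can be thought of as a sum of point reprojection residuals plus line residuals measured as the distance from projected 3D line endpoints to the matched 2D line. The sketch below is an assumed toy formulation (names and parameterization are illustrative, not the paper's); in practice such a cost would be minimized over the pose, e.g., with Gauss-Newton or Levenberg-Marquardt.

```python
import numpy as np

def project(K, R, t, X):
    """Pinhole projection of Nx3 world points with pose (R, t)."""
    x = (K @ (R @ X.T + t.reshape(3, 1))).T
    return x[:, :2] / x[:, 2:3]

def point_line_cost(K, R, t, pts3d, pts2d, lines3d, lines2d):
    """Toy joint cost: squared point reprojection error plus squared distance
    of projected 3D line endpoints to the matched 2D line ax + by + c = 0,
    with (a, b) assumed unit-normalized."""
    cost = np.sum((project(K, R, t, pts3d) - pts2d) ** 2)
    for (P1, P2), (a, b, c) in zip(lines3d, lines2d):
        ends = project(K, R, t, np.stack([P1, P2]))
        cost += np.sum((ends @ np.array([a, b]) + c) ** 2)
    return cost
```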
End-to-end generative methods are considered a more promising solution for image restoration in physics-based vision than traditional deconstructive methods based on handcrafted composition models. However, existing generative methods still have plenty of room for improvement in quantitative performance. More crucially, these methods are considered black boxes due to weak interpretability, and there is rarely a theory that tries to explain their mechanism and learning process. In this study, we re-interpret these generative methods for image restoration tasks using information theory. Departing from the conventional understanding, we analyze the information flow of these methods and identify that three sources of information (extracted high-level information, retained low-level information, and external information absent from the source inputs) are involved and respectively optimized in generating the restoration results. We further derive their learning behaviors, optimization objectives, and the corresponding information boundaries by extending the information bottleneck principle. Based on this theoretical framework, we find that many existing generative methods tend to be direct applications of general models designed for conventional generation tasks, which may suffer from problems including over-invested abstraction processes, inherent detail loss, and vanishing gradients or imbalance in training. We analyze these issues with both intuitive and theoretical explanations and verify each with empirical evidence. Finally, we propose general solutions or ideas to address the above issues and validate these approaches with performance boosts on six datasets across three different image restoration tasks.
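For reference, the classical information bottleneck objective that the study extends trades compression of the input against preservation of task-relevant information; in its standard form (not the paper's extended boundaries) it reads:

\[
\min_{p(z \mid x)} \; I(X; Z) \;-\; \beta \, I(Z; Y)
\]

where Z is the learned representation of the degraded input X, Y is the restoration target, and the multiplier \(\beta\) controls how much information about Y is retained relative to how aggressively X is compressed.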